Navigating the New Frontier - A Charity's Guide to Data Security in the Age of Generative AI
This report has been prepared by Keith Taynton at autoSentia and outlines the opportunities and significant data security, privacy, and ethical challenges that Large Language Models (LLMs) present to the charity sector. While LLMs offer exciting potential for enhancing efficiency in areas like content creation, administration, and research, their adoption introduces complex risks that require proactive management.
LLMs represent a new and serious threat vector, distinct from traditional IT security concerns. Their complexity and the vast datasets they process create new attack surfaces. Key risks include:
- Prompt Injection: Attackers can manipulate LLMs through crafted inputs, potentially leading to data leaks, misinformation spread, or unauthorized actions.
- Sensitive Data Exposure: Inputting confidential data (donor details, beneficiary information) into public LLMs risks this data being used for training, logged, or exposed in breaches, leading to GDPR non-compliance and loss of trust.
- Training Data Poisoning: Malicious data introduced during training can cause LLMs to exhibit biases, create backdoors, or generate harmful content.
- Insecure Output Handling: Unvalidated LLM outputs can introduce security vulnerabilities like XSS or SQL injection if used directly in other systems.
- Other Threats: Supply chain risks, excessive LLM permissions, model theft, and Denial of Service attacks are also concerns.
Meeting GDPR obligations is critical. Charities must identify a lawful basis for processing personal data with LLMs, ensure transparency with individuals, practice data minimisation, maintain accuracy, and be prepared to uphold data subject rights (access, erasure), which can be technically challenging with current LLM designs. Conducting Data Protection Impact Assessments (DPIAs) is likely mandatory for most LLM uses involving personal data, especially sensitive categories. The Information Commissioner's Office (ICO) is actively scrutinising AI use and has expressed concerns about transparency and the lawful basis for training data.
Beyond security, ethical considerations are paramount. LLMs can perpetuate algorithmic bias if trained on biased data, potentially leading to discriminatory service delivery or unfair decision-making. The "black box" nature of LLMs raises questions of accountability and transparency when errors occur. AI "hallucinations" pose a risk of spreading misinformation, damaging the charity's credibility. Addressing these ethical issues is crucial for safeguarding the charity's reputation and public trust.
Protecting Your Charity: Practical Steps
Protecting your charity requires practical steps:
- Develop an AI Governance and Usage Policy: Clearly define approved tools, acceptable use, strict rules for handling sensitive data, output verification requirements, and reporting procedures. Leadership must champion this policy.
- Implement Technical and Organisational Safeguards: Choose LLMs with privacy features, prioritise data minimisation and anonymisation, validate inputs and sanitise outputs where possible, implement robust access controls, and vet third-party providers carefully. Adopt a "Zero Trust" mindset.
- Empower Your Team with Training: Comprehensive, ongoing training for all staff and volunteers is non-negotiable. Cover basic LLM risks, the "never input this" rule for sensitive data, safe input practices, recognising misinformation, following the AI policy, and reporting concerns. Integrate this training into existing programmes and leverage external resources like the NCSC and ICO. Human oversight of LLM outputs is essential.
In conclusion, while LLMs offer significant potential, charities must approach their adoption with proactive vigilance. Understanding the unique risks, adhering to GDPR, addressing ethical considerations, implementing practical safeguards, and investing in continuous training are vital steps to harness the benefits of LLMs securely and responsibly, protecting both the organisation and the individuals it serves.
Ready to transform your team's AI capabilities?
Book a consultation to discuss how our expert-led training can help overcome hesitancy and build practical AI skills.
Request Your Consultation